Reduced Precision DWC: An Efficient Hardening Strategy for Mixed-Precision Architectures

Authors

Abstract

Duplication with Comparison (DWC) is an effective software-level solution to improve the reliability of computing devices. However, it introduces performance and energy consumption overheads that could be unsuitable for high-performance or real-time safety-critical applications. In this article, we present Reduced-Precision DWC (RP-DWC) as a means to lower the overhead of DWC by executing the redundant copy in reduced precision. RP-DWC is particularly suitable for modern mixed-precision architectures, such as NVIDIA GPUs, which feature dedicated functional units with programmable accuracy. We discuss the benefits and challenges associated with RP-DWC and show that the intrinsic difference between the copies allows detecting most, but not all, errors. The undetected faults are the ones that fall within the difference between precisions; they produce a much smaller impact on the application output and, thus, might be tolerated. We investigate RP-DWC's fault detection, performance, and energy consumption on Volta GPUs. Through fault injection and beam experiments, using three microbenchmarks and four real applications, we show that RP-DWC achieves excellent fault coverage (up to 86 percent) with minimal overhead (as low as 0.1 percent time and 24 percent energy overhead).
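A minimal sketch of the idea (not the article's implementation): the main copy of a saxpy-style operation runs in FP32, the redundant copy repeats it in FP16, and a mismatch larger than an illustrative tolerance is flagged as a fault. The kernel name, the example operation, and the RP_DWC_TOL value are assumptions made for illustration; how this detection threshold should be chosen is exactly what the article evaluates.

#include <cuda_fp16.h>

// Illustrative relative tolerance (an assumption): discrepancies below roughly
// the FP32/FP16 precision gap are attributed to rounding, not to a fault.
#define RP_DWC_TOL 1e-2f

// Hypothetical saxpy-style kernel hardened with RP-DWC: y = a*x + y.
__global__ void saxpy_rp_dwc(int n, float a, const float *x, float *y,
                             int *fault_flag)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Main copy in full (FP32) precision.
    float full = a * x[i] + y[i];

    // Redundant copy of the same computation in reduced (FP16) precision.
    __half rp = __hadd(__hmul(__float2half(a), __float2half(x[i])),
                       __float2half(y[i]));

    // Compare the two copies; a discrepancy larger than the precision gap
    // is flagged as a likely fault.
    float diff = fabsf(full - __half2float(rp));
    float ref  = fmaxf(fabsf(full), 1.0f);
    if (diff / ref > RP_DWC_TOL)
        atomicExch(fault_flag, 1);

    y[i] = full;
}

Faults whose effect stays below the tolerance escape detection, but, as the abstract notes, such faults also fall within the intrinsic difference between precisions and therefore have a correspondingly small impact on the output.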


Similar Articles

Mixed-Precision Memcomputing

As the CMOS scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today’s computers. In-memory computing is a promising concept in which the physical attributes and state dynamics of nanoscale resistive memory devices organized in a computational memory unit are exploited to perform computation...


Mixed-Precision Vector Processors


Mixed Precision Training

Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce a methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyperparameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmeti...


Generic-Precision algorithm for DCT-Cordic architectures

In this paper, we propose a generic algorithm to calculate the rotation parameters of the CORDIC angles required for the Discrete Cosine Transform (DCT) algorithm. This allows us to increase the precision of the calculation to meet any required accuracy. Our contribution is to use this decomposition in a CORDIC-based DCT, which is appropriate for domains that require high quality and top precision. We then propose a...


WRPN: Wide Reduced-Precision Networks

For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However,...



Journal

Journal Title: IEEE Transactions on Computers

Year: 2022

ISSN: 1557-9956, 2326-3814, 0018-9340

DOI: https://doi.org/10.1109/tc.2021.3058872